visibility condition
EZREAL: Enhancing Zero-Shot Outdoor Robot Navigation toward Distant Targets under Varying Visibility
Zeng, Tianle, Peng, Jianwei, Ye, Hanjing, Chen, Guangcheng, Luo, Senzi, Zhang, Hong
Zero-shot object navigation (ZSON) in large-scale outdoor environments faces many challenges; we specifically address a coupled pair: long-range targets that reduce to tiny image projections, and intermittent visibility caused by partial or complete occlusion. We present a unified, lightweight closed-loop system built on an aligned multi-scale image tile hierarchy. Through hierarchical target-saliency fusion, it summarizes localized semantic contrast into a stable coarse-layer regional saliency that provides the target direction and indicates target visibility. This regional saliency supports visibility-aware heading maintenance through keyframe memory, saliency-weighted fusion of historical headings, and active search during temporary invisibility. The system avoids whole-image rescaling, enables deterministic bottom-up aggregation, supports zero-shot navigation, and runs efficiently on a mobile robot. Across simulation and real-world outdoor trials, the system detects semantic targets beyond 150 m, maintains a correct heading through visibility changes with 82.6% probability, and improves overall task success by 17.5% compared with state-of-the-art methods, demonstrating robust ZSON toward distant and intermittently observable targets.
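The saliency-weighted fusion of historical headings described above can be sketched as a circular weighted mean over a keyframe memory. This is a minimal illustration, not the paper's implementation: the exponential `decay`, the example bearings, and the `maxlen=8` buffer are all assumed values.

```python
import math
from collections import deque

def fuse_headings(history, decay=0.9):
    """Saliency-weighted circular mean of recent headings (radians).

    `history` holds (heading, saliency) pairs, newest last; older entries
    are down-weighted by `decay`. Returns None if total weight is zero.
    """
    x = y = 0.0
    n = len(history)
    for i, (theta, sal) in enumerate(history):
        w = sal * decay ** (n - 1 - i)  # newer keyframes count more
        x += w * math.cos(theta)
        y += w * math.sin(theta)
    if x == 0.0 and y == 0.0:
        return None
    return math.atan2(y, x)

# Keyframe memory: target seen to the north-east, then briefly occluded
# (zero saliency); the fused heading holds near the last confident bearings.
memory = deque(maxlen=8)
memory.append((math.radians(40), 0.9))
memory.append((math.radians(50), 0.8))
memory.append((math.radians(0), 0.0))   # target invisible this frame
heading = fuse_headings(memory)
```

Averaging unit vectors rather than raw angles avoids the 359°/1° wrap-around problem, which matters when a robot circles a target.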
AI Meets Maritime Training: Precision Analytics for Enhanced Safety and Performance
Traditional simulator-based training for maritime professionals is critical for ensuring safety at sea but often depends on subjective trainer assessments of technical skills, behavioral focus, communication, and body language, posing challenges such as subjectivity, difficulty in measuring key features, and cognitive limitations. Addressing these issues, this study develops an AI-driven framework to enhance maritime training by objectively assessing trainee performance through visual focus tracking, speech recognition, and stress detection, improving readiness for high-risk scenarios. The system integrates AI techniques, including visual focus determination using eye tracking, pupil dilation analysis, and computer vision; communication analysis through a maritime-specific speech-to-text model and natural language processing; communication correctness using large language models; and mental stress detection via vocal pitch. Models were evaluated on data from simulated maritime scenarios with seafarers exposed to controlled high-stress events. The AI algorithms achieved high accuracy, with ~92% for visual detection, ~91% for maritime speech recognition, and ~90% for stress detection, surpassing existing benchmarks. The system provides insights into visual attention, adherence to communication checklists, and stress levels under demanding conditions. This study demonstrates how AI can transform maritime training by delivering objective performance analytics, enabling personalized feedback, and improving preparedness for real-world operational challenges.
- Europe > Croatia > Split-Dalmatia County > Split (0.07)
- Asia > Singapore (0.05)
- North America > United States > Rhode Island > Providence County > Providence (0.04)
- Education (1.00)
- Transportation (0.95)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.48)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Speech > Speech Recognition (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
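The vocal-pitch stress cue in the entry above can be illustrated with a toy autocorrelation pitch estimator plus a baseline-relative threshold. Both pieces are assumptions for illustration only: the study does not describe its models at this level, and the 1.25 stress ratio is invented.

```python
import math

def estimate_pitch(samples, sr, fmin=75.0, fmax=400.0):
    """Rough fundamental-frequency estimate via autocorrelation (illustrative)."""
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag, best_corr = 0, 0.0
    for lag in range(lo, min(hi, len(samples) - 1)):
        corr = sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sr / best_lag if best_lag else 0.0

def stress_flag(pitch_hz, baseline_hz, ratio=1.25):
    """Flag stress when pitch rises well above the speaker's calm baseline.
    The 1.25 ratio is an assumed threshold, not a value from the study."""
    return pitch_hz > ratio * baseline_hz

# Synthetic 220 Hz "voiced" frame at an 8 kHz sample rate.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(2048)]
pitch = estimate_pitch(tone, sr)
```

Real systems would track pitch per frame and smooth over an utterance; the baseline would come from a calm-condition calibration recording.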
Low-Cost Infrared Vision Systems for Improved Safety of Emergency Vehicle Operations Under Low-Visibility Conditions
Naddaf-Sh, M-Mahdi, Lee, Andrew, Yen, Kin, Amini, Eemon, Soltani, Iman
This study investigates the potential of infrared (IR) camera technology to enhance driver safety for emergency vehicles operating in low-visibility conditions, particularly at night and in dense fog. Such environments significantly increase the risk of collisions, especially for tow trucks and snowplows that must remain operational in challenging conditions. Conventional driver assistance systems often struggle under these conditions due to limited visibility. In contrast, IR cameras, which detect the thermal signatures of obstacles, offer a promising alternative. The evaluation combines controlled laboratory experiments, real-world field tests, and surveys of emergency vehicle operators. In addition to assessing detection performance, the study examines the feasibility of retrofitting existing Department of Transportation (DoT) fleets with cost-effective IR-based driver assistance systems. Results underscore the utility of IR technology in enhancing driver awareness and provide data-driven recommendations for scalable deployment across legacy emergency vehicle fleets.
- North America > United States > California > Yolo County > Davis (0.15)
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (10 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.87)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Vision (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
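The thermal-signature idea behind IR-based detection can be sketched as a simple temperature-contrast threshold over an IR frame. The `delta_c` margin and the toy frame are assumed values; the study's actual detection pipeline is not specified here.

```python
def hot_regions(frame, ambient_c, delta_c=8.0):
    """Flag pixels whose apparent temperature exceeds ambient by delta_c.

    `frame` is a 2-D list of per-pixel temperatures in Celsius; the 8-degree
    margin is an assumed tuning value, not one from the study. Returns the
    set of (row, col) coordinates of candidate obstacle pixels.
    """
    return {(r, c)
            for r, row in enumerate(frame)
            for c, t in enumerate(row)
            if t - ambient_c > delta_c}

# A cold foggy scene with one warm obstacle (e.g. a pedestrian) at (1, 2):
# fog attenuates visible light but leaves the thermal contrast intact.
frame = [[4.0, 4.5, 5.0],
         [4.0, 4.2, 31.0],
         [5.0, 4.8, 4.9]]
pixels = hot_regions(frame, ambient_c=4.5)
```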
From Fog to Failure: How Dehazing Can Harm Clear Image Object Detection
Low-visibility conditions, such as rain, snow, fog, smoke, and haze, pose significant challenges for deep learning applications in autonomous vehicles, security and surveillance, maritime navigation, and agricultural robotics. Under these conditions, object detection models struggle due to reduced contrast and obscured features, leading to performance degradation. A key motivation for this work comes from the impact of poor visibility on airport operations, where disruptions in taxiing and docking cause delays and increase reliance on ground support. This study proposes a deep learning framework inspired by human visual perception to enhance object recognition in adverse visibility scenarios, particularly in foggy environments. It explores the challenges of integrating human visual cue-based dehazing into object detection, given the selective nature of human perception. While human vision adapts dynamically to environmental conditions, computational dehazing does not always enhance detection uniformly. We propose a multi-stage framework where a lightweight detector identifies regions of interest (RoIs), which are then enhanced via spatial attention-based dehazing before final detection by a heavier model. We analyze this phenomenon, investigate possible causes, and offer insights for designing hybrid pipelines that balance enhancement and detection. Our findings highlight the need for selective preprocessing and challenge assumptions about universal benefits from cascading transformations.
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New York > Monroe County > Rochester (0.04)
- (6 more...)
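The multi-stage framework above (light proposer, RoI-only dehazing, heavier detector) can be sketched as follows. The `propose`, `dehaze`, and `detect` callables are hypothetical stand-ins for the paper's learned models; only the control flow is the point.

```python
def crop(img, box):
    """Extract a rectangular region (r0:r1, c0:c1) from a 2-D list image."""
    r0, r1, c0, c1 = box
    return [row[c0:c1] for row in img[r0:r1]]

def selective_pipeline(img, propose, dehaze, detect):
    """Selective-preprocessing sketch: a light proposer picks RoIs, only
    those crops are dehazed, and a heavier detector scores the enhanced
    crops. All three callables are hypothetical stand-ins."""
    detections = []
    for box in propose(img):
        enhanced = dehaze(crop(img, box))
        detections.extend((box, label) for label in detect(enhanced))
    return detections

# Toy usage: one proposed box, a contrast-doubling "dehazer", and a
# max-intensity threshold "detector".
img = [[0.2, 0.3],
       [0.1, 0.4]]
propose = lambda im: [(0, 2, 0, 2)]
dehaze = lambda c: [[min(1.0, 2 * v) for v in row] for row in c]
detect = lambda c: ["obstacle"] if max(max(r) for r in c) > 0.6 else []
found = selective_pipeline(img, propose, dehaze, detect)
```

Restricting enhancement to proposed RoIs is exactly the selectivity the paper argues for: whole-image dehazing can distort already-clear regions and hurt the final detector.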
A week ago, Tesla showcased futuristic robotaxis. Then a pedestrian got hit.
The United States government's road safety agency is again investigating Tesla's "Full Self-Driving" system, this time after getting reports of crashes in low-visibility conditions, including one that killed a pedestrian. The National Highway Traffic Safety Administration says in documents that it opened the probe on Oct. 17 with the company reporting four crashes after Teslas entered areas of low visibility, including sun glare, fog, and airborne dust. In addition to the pedestrian's death, another crash involved an injury, the agency said. Investigators will look into the ability of "Full Self-Driving" to "detect and respond appropriately to reduced roadway visibility conditions, and if so, the contributing circumstances for these crashes." The investigation covers roughly 2.4 million Teslas from the 2016 through 2024 model years.
- Transportation > Ground > Road (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Soil Sample Search in Partially Observable Environments
Abstract-- To work in unknown outdoor environments, autonomous sampling machines need the ability to target samples despite limited visibility and robotic arm reach distance. We design a heuristic guided search method to speed up the search process and more efficiently localize the approximate center of soil regions. Through simulation experiments, we assess the effectiveness of the proposed algorithm and discover superior performance in terms of speed, distance traveled, and success rate compared to naive baselines. I. INTRODUCTION In this paper, we address the problem of autonomous sample collection in outdoor, unknown environments. While collecting soil or similar organic material, there are no guarantees that samples will be reachable, visible, or even present. For this reason, a robot needs an effective search task to locate and move sufficiently close to the samples prior to collection. Figure 1: In this example, a robot--perhaps a camera mounted on the end effector of a robotic arm--uses a heuristic method to search for the center of a soil region in a sample distribution. The circle is the start position, and the star indicates the center which the agent aims to reach.
- North America > United States > Virginia > Fairfax County > Reston (0.04)
- North America > United States > California > Monterey County > Monterey (0.04)
- North America > United States > California > Los Angeles County > Pasadena (0.04)
- (7 more...)
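The heuristic guided search above can be illustrated as a greedy hill-climb over local soil-density readings. The paper's heuristic is richer than this; the `sense` function, its peak location, and the four-connected step set are illustrative assumptions.

```python
def heuristic_search(start, sense, max_steps=100):
    """Greedy hill-climb toward the densest soil reading (a sketch of a
    heuristic-guided search, not the paper's algorithm).

    `sense(cell)` returns a local soil-density estimate in [0, 1]; the
    search stops at a local maximum, taken as the region center.
    """
    pos = start
    path = [pos]
    for _ in range(max_steps):
        r, c = pos
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        best = max(nbrs, key=sense)
        if sense(best) <= sense(pos):  # no better neighbor: local maximum
            break
        pos = best
        path.append(pos)
    return pos, path

# Toy density field peaked at (3, 3), falling off with Manhattan distance.
sense = lambda p: max(0.0, 1.0 - 0.1 * (abs(p[0] - 3) + abs(p[1] - 3)))
center, path = heuristic_search((0, 0), sense)
```

Compared with a naive raster scan of the whole grid, the greedy ascent visits only cells along one monotone path, which matches the paper's claimed gains in speed and distance traveled.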
Robotic Detection and Estimation of Single Scuba Diver Respiration Rate from Underwater Video
Kutzke, Demetrious T., Sattar, Junaed
Human respiration rate (HRR) is an important physiological metric for diagnosing a variety of health conditions, from stress levels to heart conditions. Estimation of HRR is well studied in controlled terrestrial environments, yet robotic estimation of HRR as an indicator of diver stress in underwater human-robot interaction (UHRI) scenarios is, to our knowledge, unexplored. We introduce a novel system for robotic estimation of HRR from underwater visual data that uses the bubbles from exhalation cycles in scuba diving to time the respiration rate. We introduce a fuzzy labeling system that utilizes audio information to label a diverse dataset of diver breathing data, on which we compare four different methods for characterizing the presence of bubbles in images. Ultimately, we show that our method is effective at estimating HRR by comparing the respiration rate output with human analysts. Figure 1: Robotic estimation of diver respiration rate during a closed-water evaluation of the proposed detection system, using the Aqua autonomous underwater vehicle [8].
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Oceania > Australia (0.04)
- North America > Barbados (0.04)
- (7 more...)
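Timing respiration from bubble detections, as in the entry above, reduces to counting rising edges of a per-frame bubble-presence score. A minimal sketch, assuming a 0.5 threshold and synthetic scores (neither is from the paper):

```python
def respiration_rate_bpm(bubble_score, fps, threshold=0.5):
    """Estimate breaths per minute from a per-frame bubble-presence score.

    Each rising edge of the thresholded score is counted as one exhalation
    onset; the 0.5 threshold is an assumed value, not one from the paper.
    """
    onsets = 0
    above = False
    for s in bubble_score:
        if s >= threshold and not above:
            onsets += 1
        above = s >= threshold
    minutes = len(bubble_score) / fps / 60.0
    return onsets / minutes if minutes else 0.0

# Synthetic 60 s clip at 10 fps: 15 one-second bubble bursts, each
# followed by three quiet seconds, i.e. a 15 breaths-per-minute diver.
fps = 10
scores = ([0.9] * 10 + [0.1] * 30) * 15
rate = respiration_rate_bpm(scores, fps)
```

Edge counting (rather than summing frame scores) makes the estimate robust to how long each bubble plume stays visible.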
This AR Helmet Lets Helicopter Pilots 'Own The Sky' - VRScout
Pilots can actually see through the body of the aircraft as they fly. International technology company Elbit Systems this week unveiled a helicopter vision suite for military helicopters that uses a combination of augmented reality (AR), machine learning, and artificial intelligence to give pilots better visibility in degraded visibility conditions. The system is so powerful, according to the company, that pilots can actually see through the aircraft as they fly. The platform is built around three primary components: an AR head-mounted display (HMD), an AI-powered mission computer, and an array of sensor systems, including the Xplore radar and the BrightNite multispectral payload, which includes day and infrared cameras for thermal vision. These sensors are mounted to the nose of the aircraft and are used to generate a virtual map of the local terrain, including any obstacles such as powerlines or antennas.
- Transportation > Air (1.00)
- Aerospace & Defense > Aircraft (1.00)